Prototyping The NkGraphics Tutorials

In this section, we will discover what the editor is capable of. For that, we will reproduce the final result of the nkGraphics tutorial series, namely tutorial 07 about composition.

This will let us witness how transparently the API can be driven from the UI, and how easily this can be done.

Launch

The editor is available in the Bin folder of the release archive. Once launched, you will be greeted by something like :

The editor
The editor !

A quick tour :

That covers the initial interface tour. From there, it is possible to access everything the editor has to offer.

Toying with resources

Let's start by loading the basic resources we need : a mesh for the sphere, and a cubemap texture for the environment.

Loading the mesh

Let's load the mesh. Open the Resources menu, and select the Meshes item :

Select the mesh item
Resources menu, Meshes item

This will make another window pop up :

Mesh interface
This is the meshes resources manipulation window

This design is used for resource windows in general :

For meshes, in the version this tutorial is based on, you can rename them, save their declaration files, change their source data, and view or alter their attributes. Let's first create a resource. For that, click the little "+" button in the bottom left part of the window, under the resource list. This will open a sub window :

Mesh creation
Resource creation window

Once more, you will find this window for each resource type available. On top, you can specify a declaration file from which to load and create the resource(s). The bottom part allows you to create a resource from scratch by specifying its name. Let's do that by entering the name "Sphere" and hitting enter, or the button next to the field. It now appears in the resource list, and if we click on it :

Mesh created
Our mesh in the interface

As you can see, the interface is now filled with what the mesh has to offer. It is named "Sphere", has no data path set yet, and its data only holds the vertex positions. While it can be surprising, this is true in this context : our mesh can be used right away, even though it is empty. But that would not be very fun to see, so let's find a file to load. Hit the "..." button within the Source group, and search for "sphere.obj" within the Data/Meshes folder of the release. Once done, the interface will be updated :

Mesh loaded
New information

The Source data path is now updated. We also witness new attributes provided by the mesh : positions, normals, UVs. This is what the mesh has available in its data, as detected by the editor. API-wise, what we did is create a mesh and set its resource path. The loading is managed by the editor itself.
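For reference, here is a minimal sketch of what this could look like in code. The names used (MeshManager, createOrRetrieve, setResourcePath, load) are assumptions patterned on the nkGraphics tutorials, to be checked against the API version you use :

// Hypothetical API equivalent of what the editor just did for us.
// Method names are assumptions, not guaranteed to match your version.
nkGraphics::Mesh* mesh = nkGraphics::MeshManager::getInstance()->createOrRetrieve("Sphere") ;
mesh->setResourcePath("Data/Meshes/sphere.obj") ;
mesh->load() ; // Triggered automatically by the editor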

Adding it to the scene tree

The next step is to add it to the scene. For that, right click within the Scene Tree list :

Alter the scene tree
Right clicking opens a new menu

Altering the scene tree goes through right clicking on items within the list. If you click "Add Node", you will get a new item in the list, Node_0. Right clicking on this new item offers new possibilities. Select "Add Renderable", then right click the new item again to "Append Mesh". In the list window that pops up, select our mesh, "Sphere", and hit Ok.

The scene tree list should look like :

Scene tree final state
Final tree state we should expect

If we were doing this through the API, this would hide setting an entity in the render queue, adding a mesh to its sub entity, and plugging it to a node. And believe it or not, the sphere is actually rendered in the 3D Graphics window !
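As a rough sketch, and assuming an API shaped like the one from the tutorials (the render queue accessor, addEntity and node plugging calls below are all assumptions), those clicks could translate to something like :

// Hedged sketch : names are assumptions modeled on the nkGraphics tutorials.
nkGraphics::RenderQueue* queue = context->getRenderQueue() ; // hypothetical accessor
nkGraphics::Entity* entity = queue->addEntity() ; // set an entity in the render queue
entity->addChild()->setMesh(mesh) ; // add the mesh to its sub entity
nkGraphics::Node* node = nkGraphics::NodeManager::getInstance()->create() ; // Node_0
entity->setParentNode(node) ; // plug it to the node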

The input pattern within the scene

If you read the nkGraphics tutorial series, you probably remember that the sphere is centered around 0, which is exactly where our camera starts. In the editor, however, there is no need to move the node : we will move the camera instead. For that, we will use the input pattern available in the editor :

This camera controller scheme is quite similar to the way you would control a free camera within a First Person Shooter game. So, to move our camera away from the sphere, we need to :

You should see something similar to :

Visible sphere
The sphere as seen originally

From there, using this reference, feel free to test the controller a bit and see how it feels. Once ready, let's proceed !

Working with the texture

With this resource creation process in mind, it should be fairly easy to create the texture. Open the window, click the "+" button, specify the name, and select it within the list. The next thing we want to do is load the cubemap. For that, click the "Open File..." button in the bottom right and search for the Data/Textures/YokohamaNight.dds texture.

Loading a texture source
Open a texture file

Once done, you should see that the type of the texture has been updated to TEX_CUBEMAP. The texture is ready to be used !
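API-wise, this maps to creating a texture and pointing it at a file, along the lines of the following sketch (the manager and setter names are assumptions, as is the resource name) :

// Hypothetical sketch ; method and resource names are assumptions.
nkGraphics::Texture* tex = nkGraphics::TextureManager::getInstance()->createOrRetrieve("EnvCubeMap") ;
tex->setFromFile("Data/Textures/YokohamaNight.dds") ;
tex->load() ; // The cubemap type is then detected from the DDS file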

Creating the programs

Like any other resource, programs are created through the dedicated "Shader Programs" window. Let's create two programs : one for the sphere, and one for the environment behind it. We will first work on the one for the sphere.

When selecting it, the program will be flagged by default as a "Pipeline" program. This means it is to be used for scene painting. Basically, when specifying a program in the editor, it needs to be of a certain type, depending on the usage you will have for it :

Now, we want to assign this shader to an object, so let's keep it as a pipeline program. In each group, you will notice the program stages that can be written (vertex, domain...). We will need the vertex and pixel stages. Let's edit the vertex stage by clicking the "Edit" button next to it :

Editing a program
Click the edit button next to the vertex stage for a pipeline program

A new window will pop up, with default shader sources prefilled :

Editing a program sources
Source edition window

On top, each tab corresponds to a stage that can be edited. When switching, you will find the currently active sources for that program stage. The macros group allows you to specify macros for the program compilation. The sources group speaks for itself, and holds the sources of the program. The "Nothing to show." part is the compilation log window, reporting any errors encountered when recompiling through the associated button.

Let's edit the sources of our vertex stage :

cbuffer PassBuffer : register(b0)
{
    matrix view ;
    matrix proj ;
    float3 camPos ;
}

struct VertexInput
{
    float4 position : POSITION ;
    float3 normal : NORMAL ;
    matrix world : WORLDMAT ;
} ;

struct PixelInput
{
    float4 position : SV_POSITION ;
    float3 normal : NORMAL ;
    float3 camDir : CAMDIR ;
} ;

PixelInput main (VertexInput input)
{
    PixelInput result ;

    matrix mvp = mul(input.world, mul(view, proj)) ;
    result.position = mul(input.position, mvp) ;
    result.normal = input.normal ;
    result.camDir = normalize(mul(input.position, input.world) - camPos) ;

    return result ;
}

Now switch to the pixel stage, and change its sources to :

struct PixelInput
{
    float4 position : SV_POSITION ;
    float3 normal : NORMAL ;
    float3 camDir : CAMDIR ;
} ;

TextureCube tex : register(t0) ;
SamplerState customSampler : register(s0) ;

float4 main (PixelInput input) : SV_TARGET
{
    float3 sampleDir = reflect(input.camDir, normalize(input.normal)) ;
    return tex.Sample(customSampler, sampleDir) ;
}

And hit the recompile button. The compilation should run fine, as reported by the log window :

Compilation result
Successful recompilation of the program !
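What the editor just did for us, expressed through the API, would resemble the sketch below. The ProgramSourcesHolder based flow is recalled from the nkGraphics tutorials, but treat every name as an assumption :

// Hedged sketch of program creation and compilation.
nkGraphics::Program* program = nkGraphics::ProgramManager::getInstance()->createOrRetrieve("sphereProgram") ;
nkGraphics::ProgramSourcesHolder sources ;
sources.setVertexMemory(vertexSrc) ; // the vertex stage HLSL above
sources.setPixelMemory(pixelSrc) ; // the pixel stage HLSL above
program->setFromMemory(sources) ;
program->load() ; // compiles ; errors would land in the log window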

Creating the shader

For the program to be used, we need the shader. Let's create it through its dedicated interface, with the usual process. Once it is selected in the list, set its program through the "..." button in the program group, based on the name you gave it. Next, we need to set up the resources it provides, which goes through the tab window occupying the bottom right part.

The process is as follows : add a resource through the "+" button, and it takes a default value. Click on it, and change whatever parameters are needed through the right part of the window. For instance, for our texture : add it, click on it, and click the "..." button after the Texture label. Select the texture we loaded earlier, and validate.

Shader parameterized
Shader now feeds the texture we want

The sampler can just be added as is, as we don't need to alter the way it's made.

We then need to add a constant buffer. Using the same process, we add a buffer, and click on it to unveil the slots inside. A freshly created buffer has no slots, so we need to add some, again from the dedicated list. Once we click on a slot, we can see its type in the combo box under it. Set the types to make them correspond to the input the HLSL expects ; no parameter other than the type needs editing :

Final constant buffer setup
Constant buffer aligned with the HLSL version

Finally, go into the Instance Buffers tab and add a slot, which will automatically be the World Matrix we need.
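Summed up as code, and hedging again on the exact slot API (the addConstantBuffer and memory slot names below are assumptions), the shader setup could look like :

// Hypothetical sketch of the shader setup we just clicked through.
nkGraphics::Shader* shader = nkGraphics::ShaderManager::getInstance()->createOrRetrieve("sphereShader") ;
shader->setProgram(program) ;
shader->addTexture(tex) ; // register(t0)
shader->addSampler() ; // register(s0), default sampler
nkGraphics::ConstantBuffer* cbuffer = shader->addConstantBuffer(0) ; // register(b0)
cbuffer->addPassMemorySlot()->setAsViewMatrix() ;
cbuffer->addPassMemorySlot()->setAsProjectionMatrix() ;
cbuffer->addPassMemorySlot()->setAsCameraPosition() ;
shader->addInstanceMemorySlot()->setAsWorldMatrix() ; // Instance Buffers tab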

Shader assignment

Now we need to use the shader with the mesh currently shown. You probably saw the option passing by : what we now need is to set the shader via the scene tree. Right click on the Entity, and assign the pipeline shader we just created through the list.

Assign a shader through the scene tree
Shader assignment through the scene tree
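Behind this menu entry presumably hides a single API call, something like the following (an assumed setter name, to verify against the API) :

entity->setShader(shader) ; // hypothetical setter on the entity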

The rendering in the 3D window should now be drastically different :

Reflections on the sphere
Our shader is now used to render the sphere

Creating the post processing shader

What is left before altering the composition is to create and set up the shader and program for the background. For that, open the dedicated windows again. We begin with the program.

Open the window, create a new program, leave it as a pipeline program and open the sources for the vertex stage. Replace them by :

cbuffer constants
{
    float4 camDir [4] ;
}

struct VertexInput
{
    float4 position : POSITION ;
    uint vertexId : SV_VertexID ;
} ;

struct PixelInput
{
    float4 position : SV_POSITION ;
    float4 camDir : CAMDIR ;
} ;

PixelInput main (VertexInput input)
{
    PixelInput result ;

    result.position = input.position ;
    result.camDir = camDir[input.vertexId] ;

    return result ;
}

Switch to the pixel stage, and replace the sources by :

struct PixelInput
{
    float4 position : SV_POSITION ;
    float4 camDir : CAMDIR ;
} ;

TextureCube envMap : register(t0) ;
SamplerState customSampler : register(s0) ;

float4 main (PixelInput input) : SV_TARGET
{
    return envMap.Sample(customSampler, normalize(input.camDir)) ;
}

We won't go over these sources again here ; please consult the composition tutorial (07) within nkGraphics for that. Hit the recompile button, and ensure everything compiles without any problem.

It is time to create the attached shader. In its window, create a new shader and set its program to the one we just created. Update the constant buffer so it provides the camera directions in world space, and add the texture and the sampler again.

With this, the shader is ready ! What is left is creating the compositor, and using it for rendering.

Creating the compositor

Compositors are resources like any other, and we need to open their window to manipulate them. This window might seem more complex at first sight, but it directly maps over what the API offers. As such, if you know the API, you should be able to understand what it is about and manipulate it right away. If not, let's see the window in detail :

Compositor Window
Compositor window

The window presents all aspects of a compositor in one place. It reads from top left to bottom right.

On the left is the list of compositors, as for every resource. Then come the name control and the heart of the compositor API. From there, you can add a node to the list. Upon selection, the node settings, activity and target operations, become visible in the next group box. Adding a target operation and selecting it leads to the next group, from which you can set the targets, viewport, and passes. Adding a pass and selecting it leads to the next group again, at the start of the next line. Its type can be changed, and depending on that type, adapted settings can be tweaked within the last group on the right.

Once a compositor fits your needs, its declaration can be saved through the dedicated group button.

From the UI, we need to reproduce the compositor built within the nkGraphics tutorial. The window demonstrated above showcases a major part of what is needed, and an API-level sketch follows the list :

  1. Create a compositor, select it in the list
  2. Add a node in the dedicated group, select it
  3. Add a target operation in the list, select it
  4. Add a color target, selecting the back buffer in the list, corresponding to the context's back buffer, NILKINS_BACK_BUFFER
  5. Set the depth target as the depth buffer of the context, NILKINS_DEPTH_BUFFER
  6. Add 3 passes :
    1. Clear pass. No need to edit it after adding it : the default clear pass fits our needs
    2. Scene pass. Select the Scene pass type and leave its default settings ; it will pick the render queue our sphere is in by default
    3. Post process pass. Select the Post pass type, change the shader to the one rendering the environment, and tick the back process box to make it render behind the mesh
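To relate this to the API one more time, here is a hedged sketch of how such a compositor could be assembled in code. Every class and method name in it is an assumption modeled on the nkGraphics composition tutorial, to be verified against the real headers :

// Hypothetical compositor assembly ; all names below are assumptions.
nkGraphics::Compositor* compositor = nkGraphics::CompositorManager::getInstance()->createOrRetrieve("editorCompositor") ;
nkGraphics::CompositorNode* node = compositor->addNode() ;
nkGraphics::TargetOperations* ops = node->addOperations() ;
ops->addColorTarget(nullptr) ; // null standing in for NILKINS_BACK_BUFFER
ops->setDepthTarget(nullptr) ; // null standing in for NILKINS_DEPTH_BUFFER
ops->addClearTargetsPass() ; // 1. clear pass, default settings
ops->addRenderScenePass() ; // 2. scene pass, default render queue
nkGraphics::PostProcessPass* post = ops->addPostProcessPass() ; // 3. post pass
post->setShader(envShader) ; // the environment shader created earlier
post->setBackProcess(true) ; // render behind the mesh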

Once set up, we need to tell the editor to use this compositor for its active rendering. For this, we access the rendering logic window through the Scene menu :

Scene Menu
Changing the logic goes through this sub menu

We are greeted by this window, in which we need to change the active compositor, selecting the one we created in the list :

Compositor change interface
In this interface, change the active compositor
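In API terms, this presumably boils down to retargeting the rendering context, along the lines of this assumed call :

context->setCompositor(compositor) ; // hypothetical ; check the rendering logic API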

Once this is done, the rendering window should reflect that :

Final image
Complete rendering override

Saving our work for later

Now that we've done all of this and are satisfied with how it looks, it would be wise to save our work, to be able to reopen it quickly and iterate on it.

Access the File menu, Save item. In the browser that opens, find the folder in which you want to save, and validate. The saving process will create some files and folders, so it is advised to save into a dedicated folder to keep the structure clear.

And that's it ! The current setup is saved. When opening the editor again, just hit File > Load and open the saved project file. The scene will be loaded again, and you will be able to start from where you left off.

To conclude

The editor drives the API in a very transparent way. However low level this can seem, it allows great control over the rendering we want. The UI on top of that adds a lot of comfort, and allows prototyping and iterating much quicker than a standard C++ application development process would.

And even with all that, the editor still has some functionalities left to unveil. Isn't that pretty ?